Fairness requirements
Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems
Long-term fairness is an important consideration in designing and deploying learning-based decision systems in high-stakes decision-making contexts. Recent work has proposed the use of Markov Decision Processes (MDPs) to formulate decision-making with long-term fairness requirements in dynamically changing environments, and has demonstrated major challenges in directly deploying heuristic and rule-based policies that worked well in static environments. We show that policy optimization methods from deep reinforcement learning can be used to find strictly better decision policies that often achieve both higher overall utility and less violation of the fairness requirements, compared to previously known strategies. In particular, we propose new methods for imposing fairness requirements in policy optimization by regularizing the advantage evaluation of different actions. Our proposed methods make it easy to impose fairness constraints without reward engineering or sacrificing training efficiency. We perform detailed analyses in three established case studies: attention allocation in incident monitoring, bank loan approval, and vaccine distribution in population networks.
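The abstract's core idea — regularizing advantage estimates rather than engineering rewards — can be illustrated with a minimal sketch. The function name, the scalar per-action fairness-violation signal, and the penalty weight `lam` are all hypothetical; this is not the paper's actual method, only the general shape of advantage-level shaping:

```python
def regularized_advantages(advantages, violations, lam=0.5):
    """Shift each action's advantage down by a fairness-violation
    penalty before the policy-gradient update (hypothetical shaping;
    lam trades off utility against fairness), then standardize."""
    reg = [a - lam * v for a, v in zip(advantages, violations)]
    mean = sum(reg) / len(reg)
    std = (sum((r - mean) ** 2 for r in reg) / len(reg)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in reg]

# Toy batch: two equal-utility actions, the second violates fairness more.
adv = regularized_advantages([1.0, 1.0], [0.0, 1.0], lam=0.5)
print(adv[0] > adv[1])  # True: the fairness-respecting action is preferred
```

Because only the advantage evaluation changes, the surrounding policy-optimization loop (e.g. a PPO-style update) can stay untouched, which is what makes the approach attractive compared with reward engineering.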
A Gray Literature Study on Fairness Requirements in AI-enabled Software Engineering
Nguyen, Thanh, Boufaied, Chaima, Santos, Ronnie de Souza
Today, with the growing obsession with applying Artificial Intelligence (AI), particularly Machine Learning (ML), to software across various contexts, much of the focus has been on the effectiveness of AI models, often measured through common metrics such as F1-score, while fairness receives relatively little attention. This paper presents a review of existing gray literature, examining fairness requirements in AI contexts, with a focus on how they are defined across various application domains, how they are managed throughout the Software Development Life Cycle (SDLC), and the causes and consequences of their violation by AI models. Our gray literature investigation shows various definitions of fairness requirements in AI systems, commonly emphasizing non-discrimination and equal treatment across different demographic and social attributes. Fairness requirement management practices vary across the SDLC, particularly in model training and bias mitigation, fairness monitoring and evaluation, and data handling practices. Fairness requirement violations are frequently linked, but not limited, to data representation bias, algorithmic and model design bias, human judgment, and evaluation and transparency gaps. The corresponding consequences include harm in a broad sense, encompassing specific professional and societal impacts as key examples, stereotype reinforcement, data and privacy risks, and loss of trust and legitimacy in AI-supported decisions. These findings emphasize the need for consistent frameworks and practices to integrate fairness into AI software, paying as much attention to fairness as to effectiveness.
FairAgent: Democratizing Fairness-Aware Machine Learning with LLM-Powered Agents
Dai, Yucong, Zhang, Lu, Luo, Feng, Chowdhury, Mashrur, Wu, Yongkai
Training fair and unbiased machine learning models is crucial for high-stakes applications, yet it presents significant challenges. Effective bias mitigation requires deep expertise in fairness definitions, metrics, data preprocessing, and machine learning techniques. In addition, the complex process of balancing model performance with fairness requirements while properly handling sensitive attributes makes fairness-aware model development inaccessible to many practitioners. To address these challenges, we introduce FairAgent, an LLM-powered automated system that significantly simplifies fairness-aware model development. FairAgent eliminates the need for deep technical expertise by automatically analyzing datasets for potential biases, handling data preprocessing and feature engineering, and implementing appropriate bias mitigation strategies based on user requirements. Our experiments demonstrate that FairAgent achieves significant performance improvements while substantially reducing development time and expertise requirements, making fairness-aware machine learning more accessible to practitioners.
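One of the bias analyses a system like FairAgent would automate is computing standard group-fairness metrics on a dataset's predictions. As a minimal sketch (the function name and toy data are illustrative, not FairAgent's API), here is the demographic parity gap — the largest difference in positive-prediction rate between any two groups:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any
    two groups (a common group-fairness metric; 0 means parity)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Group "a" is approved 2/3 of the time, group "b" only 1/3.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(round(gap, 3))  # 0.333
```

Automating the selection and interpretation of such metrics across many sensitive attributes is precisely the expertise barrier the paper argues an LLM agent can lower.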
Fairness in Matching under Uncertainty
Devic, Siddartha, Kempe, David, Sharan, Vatsal, Korolova, Aleksandra
Systems based on algorithms and machine learning are increasingly used to guide or outright make decisions which strongly impact human lives; thus it is imperative to take fairness into account when designing such systems. Notions of fairness in computer science can be classified into those that try to capture fairness towards a group (Hardt et al., 2016; Hébert-Johnson et al., 2018; Kearns et al., 2018; Kleinberg et al., 2017) vs. those that try to be fair to each individual (Dwork et al., 2012; Kim et al., 2018, 2020). In our work, we focus on the latter notion. The most widely studied notion of individual fairness is due to the seminal work of Dwork et al. (2012): it assumes that a metric space on observable features of individuals captures similarity, and requires that outcomes of a resource allocation mechanism satisfy a certain Lipschitz continuity condition with respect to the given metric. Intuitively, this ensures that individuals who are similar according to the metric will be treated similarly by the mechanism. We consider a setting in which individuals have preferences over the outcomes of the resource allocation mechanism, focusing on the important setting of two-sided markets. Applications of this setting abound: matching students to schools, job fair participants to interviews, doctors to hospitals, patients to treatments, drivers to passengers in ride hailing, or advertisers to ad slots/users in online advertising (Abdulkadiroğlu and Sönmez, 2003; Bronfman et al., 2015; Mehta et al., 2013; Roth, 1986; Roth et al., 2007), to name a few.
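The Lipschitz condition at the heart of Dwork et al.-style individual fairness can be checked directly. A minimal sketch, simplifying distributions over outcomes to a single favorable-outcome probability per individual (for a binary outcome, total variation distance reduces to the absolute difference); the function and the toy distance are illustrative, not the paper's construction:

```python
def lipschitz_fair(outcomes, metric, L=1.0):
    """Check individual fairness: for every pair of individuals, the
    difference in favorable-outcome probability must be at most L
    times their similarity distance under the task-specific metric."""
    n = len(outcomes)
    return all(
        abs(outcomes[i] - outcomes[j]) <= L * metric(i, j)
        for i in range(n) for j in range(i + 1, n)
    )

# Two near-identical applicants at similarity distance 0.1:
dist = lambda i, j: 0.1
print(lipschitz_fair([0.9, 0.2], dist))   # False: similar people, very different odds
print(lipschitz_fair([0.9, 0.85], dist))  # True: similar people, similar odds
```

The hard part in practice is not this check but obtaining a defensible similarity metric, which is exactly the assumption the framework makes explicit.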
Learning Density-Based Correlated Equilibria for Markov Games
Zhang, Libo, Chen, Yang, Takisaka, Toru, Khoussainov, Bakh, Witbrock, Michael, Liu, Jiamou
Correlated Equilibrium (CE) is a well-established solution concept that captures coordination among agents and enjoys good algorithmic properties. In real-world multi-agent systems, in addition to being in an equilibrium, agents' policies are often expected to meet requirements with respect to safety and fairness. Such additional requirements can often be expressed in terms of the state density, which measures the state-visitation frequencies during the course of a game. However, existing CE notions or CE-finding approaches cannot explicitly specify a CE with particular properties concerning state density; they do so implicitly by either modifying reward functions or using value functions as the selection criteria. The resulting CE may thus not fully fulfil the state-density requirements. In this paper, we propose Density-Based Correlated Equilibria (DBCE), a new notion of CE that explicitly takes state density as the selection criterion. Concretely, we instantiate DBCE by specifying different state-density requirements motivated by real-world applications. To compute DBCE, we put forward the Density-Based Correlated Policy Iteration algorithm for the underlying control problem. We perform experiments on various games where results demonstrate the advantage of our CE-finding approach over existing methods in scenarios with state-density concerns.
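The quantity DBCE-style requirements constrain — state density — is just the normalized state-visitation frequency over play. A minimal sketch of an empirical estimator from sampled trajectories (function name and the optional discounting are illustrative assumptions, not the paper's definition):

```python
from collections import Counter

def state_density(trajectories, gamma=None):
    """Empirical state-visitation frequencies over a batch of
    trajectories; pass gamma in (0, 1) for a discounted variant."""
    counts, total = Counter(), 0.0
    for traj in trajectories:
        for t, s in enumerate(traj):
            w = 1.0 if gamma is None else gamma ** t
            counts[s] += w
            total += w
    return {s: c / total for s, c in counts.items()}

rho = state_density([["s0", "s1", "s1"], ["s0", "s2", "s1"]])
print(rho["s1"])  # 0.5: visited on 3 of 6 steps
```

A safety requirement like "unsafe states are visited less than 5% of the time" then becomes a direct constraint on `rho`, rather than an indirect reward penalty.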
Can AI be fair?
WELCOME to the future, where artificial intelligence (AI) systems augment, automate or replace human decision-making. Imagine applying for a bank loan through an online system; you key in all pertinent information and almost instantaneously, the system informs you that you do not qualify for a loan. Coincidentally, you know that a friend with a profile very similar to yours got his loan approved by the same system. Let's look at a second scenario: you decide to look for a new job, so you send your resume to an online hiring system that immediately tells you you're not the right fit. A peer of yours whom you think does not qualify has better luck.
Characterization of Group-Fair Social Choice Rules under Single-Peaked Preferences
Sreedurga, Gogulapati, Sadhukhan, Soumyarup, Roy, Souvik, Narahari, Yadati
We study fairness in social choice settings under single-peaked preferences. Construction and characterization of social choice rules in the single-peaked domain have been extensively studied in prior works. In fact, in the single-peaked domain, it is known that unanimous and strategy-proof deterministic rules have to be min-max rules, and those that also satisfy anonymity have to be median rules. Further, random social choice rules satisfying these properties have been shown to be convex combinations of the respective deterministic rules. We non-trivially add to this body of results by including fairness considerations in social choice. Our study directly addresses fairness for groups of agents. To study group-fairness, we consider an existing partition of the agents into logical groups, based on natural attributes such as gender, race, and location. To capture fairness within each group, we introduce the notion of group-wise anonymity. To capture fairness across the groups, we propose a weak notion as well as a strong notion of fairness. The proposed fairness notions turn out to be natural generalizations of existing individual-fairness notions and moreover provide non-trivial outcomes for strict ordinal preferences, unlike the existing group-fairness notions. We provide two separate characterizations of random social choice rules that satisfy group-fairness: (i) a direct characterization, and (ii) an extreme point characterization (as convex combinations of fair deterministic social choice rules). We also explore the special case where there are no groups and provide sharper characterizations of rules that achieve individual-fairness.
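The baseline result this abstract builds on — that in the single-peaked domain, unanimous, strategy-proof, and anonymous deterministic rules are median rules — has a one-line realization. A minimal sketch (the function name is ours; real median rules may also include fixed "phantom" peaks, omitted here):

```python
import statistics

def median_rule(peaks):
    """Select the median of the voters' peak alternatives.  Under
    single-peaked preferences, no voter can gain by misreporting
    their peak, so this rule is strategy-proof."""
    return statistics.median_low(peaks)

# Three voters with peaks at 2, 7, and 4 on a one-dimensional spectrum:
print(median_rule([2, 7, 4]))  # 4
```

The paper's group-fair rules generalize such constructions: convex combinations of deterministic rules, constrained so that each group's agents are treated anonymously and groups are treated fairly relative to one another.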